Learned Codecs, AI Compression, Rate-Distortion Theory, Entropy Models
SlimMoE: Structured Compression of Large MoE Models via Expert Slimming and Distillation
arxiv.org·1d
Why Your Next LLM Might Not Have A Tokenizer
towardsdatascience.com·18h
Brain Mapping with Dense Features: Grounding Cortical Semantic Selectivity in Natural Images With Vision Transformers
arxiv.org·10h